Telecom fraud has emerged as one of the most pressing challenges in crime prevention. With advances in artificial intelligence, telecom fraud texts have become increasingly covert and deceptive. Existing prevention methods, such as mobile number tracking and detection and traditional machine-learning-based text recognition, struggle to identify telecom fraud in real time. Additionally, the scarcity of Chinese telecom fraud text data has limited research in this area. In this paper, we propose RoBERTa-MHARC, a telecom fraud text detection model that combines RoBERTa with a multi-head attention mechanism and residual connections. First, we select data categories from the CCL2023 telecom fraud dataset as basic samples and merge them with collected telecom fraud texts, creating a five-category dataset covering impersonation of customer service, impersonation of leaders or acquaintances, loan fraud, public security fraud, and normal text. During training, the model integrates a multi-head attention mechanism and improves training efficiency through residual connections. Finally, the model improves multi-class classification accuracy by combining an inconsistency loss with the standard cross-entropy loss. Experimental results show that the model performs well on multiple benchmark datasets, achieving F1 scores of 97.65 on the FBS dataset, 98.10 on our own dataset, and 93.69 on the news dataset.
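The classification head described above (multi-head attention with a residual connection over encoder hidden states, feeding a five-way classifier) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the module name `MHARCHead`, the hidden size, the number of heads, and the use of layer normalization and [CLS]-position pooling are all assumptions, and a random tensor stands in for RoBERTa's output.

```python
import torch
import torch.nn as nn

class MHARCHead(nn.Module):
    """Hypothetical sketch: multi-head attention + residual connection
    on top of encoder hidden states; the real RoBERTa-MHARC may differ."""

    def __init__(self, hidden=768, heads=8, num_classes=5):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, h):
        # h: (batch, seq_len, hidden) token states from a RoBERTa encoder
        a, _ = self.attn(h, h, h)      # self-attention over token states
        h = self.norm(h + a)           # residual connection + layer norm
        return self.fc(h[:, 0])        # classify from the [CLS] position

head = MHARCHead()
hidden_states = torch.randn(2, 16, 768)  # stand-in for RoBERTa outputs
logits = head(hidden_states)             # (2, 5): one score per class
```

In training, these logits would be passed to a cross-entropy loss combined with the inconsistency loss mentioned above (whose exact form is not specified here).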